GSPT-AS-based Neural Network Design

Presentation transcript:

GSPT-AS-based Neural Network Design
Presenter: Kuan-Hung Chen
Adviser: Tzi-Dar Chiueh
October 13, 2003

Good morning, ladies and gentlemen. Today, the topic I would like to talk about is the GSPT-AS-based neural network design.

Outline
Motivation
GSPT-AS LMS Algorithm
Power Amplifier Model
Predistortor Architecture
Simulation Results and Complexity Analysis
Conclusions

To begin with, I will talk about the motivation briefly. Then, the GSPT-AS LMS algorithm will be introduced. After that, the power amplifier model and the predistortor architecture used for simulation will be presented. Next, the simulation results and the complexity analysis will be shown. And finally, a brief conclusion will be given.

Motivation
Initial simulation results show that the GSPT-based neural network cannot converge. The reason is that the magnitudes of all weights end up with approximately the same order if only the sign of the updating term is used for weight updating. So it is reasonable to take the magnitude of the updating term into account. It is then straightforward to apply the GSPT-AS LMS algorithm, which takes the magnitude of the updating term into account, to the weight updating in the neural network.

Based on my initial simulation results on the GSPT-based neural network predistortor, it seems that the GSPT-based neural network cannot converge. I think the reason is that the magnitudes of all weights will be of approximately the same order if only the sign of the updating term is taken for weight updating. So it seems that the magnitude of the updating term should be taken into account in weight updating. It is then straightforward to apply the GSPT-AS LMS algorithm, which takes the magnitude of the updating term into account, to the weight updating in the neural network.

Basic Structure of an LMS Adaptive Filter
Before introducing the GSPT-AS LMS algorithm, we first briefly review the structure of an LMS adaptive filter. The output y[n] is the sum of w_i[n] multiplied by x[n-i]; this part, the linear filter, generates the output y[n]. The error signal e[n] is calculated as d[n] minus y[n]. Each weight is updated according to this equation and can be implemented with this structure: the step size μ is multiplied by e[n], then by x[n-i], and the result is added to the old coefficient.
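
A minimal sketch of one LMS iteration in Python, assuming a real-valued filter with coefficient vector w, input buffer x_buf = [x[n], x[n-1], ...], desired sample d, and step size mu (the names are illustrative, not taken from the slides):

```python
import numpy as np

def lms_step(w, x_buf, d, mu):
    """One LMS iteration: filter, compare with the desired signal, update."""
    y = np.dot(w, x_buf)        # y[n] = sum_i w_i[n] * x[n-i]  (linear filter)
    e = d - y                   # e[n] = d[n] - y[n]            (error signal)
    w = w + mu * e * x_buf      # w_i[n+1] = w_i[n] + mu * e[n] * x[n-i]
    return w, y, e
```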

GSPT LMS Algorithm
Reduces the complexity of both the linear filter and the coefficient-updating block in an adaptive filter.

As I have said before, the GSPT LMS algorithm can be used to reduce the complexity of both the linear filter and the coefficient-updating block in an adaptive filter. In the GSPT LMS algorithm, the output y[n] and the error signal e[n] are calculated in the same way as in the LMS algorithm. However, each coefficient is simply increased or decreased according to the sign of the updating term of the LMS algorithm. Note that the complexity of the linear filter is reduced by using the GSPT number system for coefficient representation, while the complexity of the coefficient-updating block is reduced by using the GSPT LMS algorithm.
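
A rough Python sketch of the sign-based update described above. The GSPT grouped signed-power-of-two coefficient representation itself is not detailed in this transcript, so an ordinary float array and a fixed step delta stand in for it here:

```python
import numpy as np

def gspt_lms_like_step(w, x_buf, d, delta):
    """y[n] and e[n] as in ordinary LMS; each coefficient is only
    increased or decreased by a fixed amount according to the sign
    of its LMS updating term e[n] * x[n-i]."""
    y = np.dot(w, x_buf)
    e = d - y
    w = w + delta * np.sign(e * x_buf)
    return w, y, e
```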

GSPT-AS LMS Algorithm
Q(z) represents the power-of-2 value that is closest to z but not larger than z, and g is the group size.

The GSPT-AS LMS algorithm was originally proposed to improve the convergence speed by updating coefficients more precisely. The output y[n] and the error signal e[n] are still calculated in the same way as in the GSPT LMS algorithm. However, each coefficient is updated by the following scheme: first, we round e[n] and x[n-k] to power-of-2 values via the function Q(·), which returns the power-of-2 value closest to its argument but not larger than it. Then the updating term is calculated. If the magnitude of the updating term is too small, we hold the coefficient. If not, we select the proper group in the GSPT number representation and set the carry-in or borrow-in signal of the updating unit corresponding to that group.
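
A sketch of the two key operations just described: the rounding function Q(·) and the selection of which GSPT group receives a carry-in or borrow-in. The sign handling inside Q and the threshold for "too small" are assumptions for illustration; the real scheme also depends on the group size g and the coefficient word length, which are not given in the transcript:

```python
import math

def q_pow2(z):
    """Q(z): the power-of-2 value closest to |z| but not larger than it,
    with the sign of z kept (sign handling is an assumption)."""
    if z == 0.0:
        return 0.0
    return math.copysign(2.0 ** math.floor(math.log2(abs(z))), z)

def gspt_as_select(e, x, mu=1.0, min_exponent=-12):
    """Form the updating term from power-of-2-rounded factors; its magnitude
    is then itself a power of two, and its exponent picks the GSPT group
    whose updating unit gets a carry-in (+1) or borrow-in (-1)."""
    term = mu * q_pow2(e) * q_pow2(x)
    if term == 0.0 or abs(term) < 2.0 ** min_exponent:
        return None                              # updating term too small: hold
    return int(math.log2(abs(term))), (1 if term > 0 else -1)
```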

Coefficient Updater for GSPT-AS LMS
Based on the magnitude of the updating term, we choose the proper updating unit to receive the carry-in/borrow-in signal.

The idea can be explained more clearly with this slide. The updating-term decision block rounds its inputs to power-of-2 values, calculates the updating term, and determines which updating unit should receive the carry-in or borrow-in signal.

Power Amplifier Model
To simulate a solid-state power amplifier, the following model is used for the AM/AM conversion. The AM/PM conversion of a solid-state power amplifier is small enough to be neglected. A good approximation of existing amplifiers is obtained by choosing p in the range of 2 to 3.

Now, I will introduce the power amplifier model used in my simulations.
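
The AM/AM formula itself appears only on the slide, not in the transcript. The description (solid-state PA, negligible AM/PM, p around 2 to 3) matches the well-known Rapp SSPA model, so the following sketch is given under that assumption:

```python
import numpy as np

def sspa_am_am(a_in, p=2.0, a_sat=1.0):
    """Rapp-style AM/AM conversion for a solid-state PA (assumed model):
    a_out = a_in / (1 + (a_in / a_sat)**(2p))**(1/(2p)).
    Larger p gives a sharper transition into saturation around a_sat."""
    a_in = np.asarray(a_in, dtype=float)
    return a_in / (1.0 + (a_in / a_sat) ** (2.0 * p)) ** (1.0 / (2.0 * p))
```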

Transfer Function of AM/AM Conversion
This figure shows the normalized transfer functions of the AM/AM conversion under different values of p. You can see that the distortion is quite severe when the input amplitude is greater than 1.

64-QAM Constellations Distorted by PA Model
This figure shows the 64-QAM constellations distorted by the PA model just presented.

Predistortor Architecture
Since only the AM-to-AM conversion is considered in this PA model, the learning architecture and the predistortion architecture shown in this slide are used. During training, the magnitude of the input constellation is regarded as the desired signal, and the magnitude of the distorted constellation is taken as the input of the neural network. The difference between the neural network's output and the desired signal is used to train the neural network. After training is completed, the magnitude of the input constellation is predistorted by the trained neural network. The output of the neural network and the phase of the input constellation are then translated to rectangular coordinates and sent to the PA.
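
A small sketch of the data flow just described. Here nn_forward stands for the trained network's forward function and pa_am_am for the amplifier's AM/AM curve; both names are placeholders, not from the slides:

```python
import numpy as np

def make_training_pair(sym, pa_am_am):
    """Training data for the magnitude predistorter: the network sees the
    distorted magnitude and is asked to reproduce the original magnitude."""
    return pa_am_am(abs(sym)), abs(sym)     # (network input, desired signal)

def predistort(sym, nn_forward):
    """Apply the trained predistorter: only the magnitude goes through the
    network, the phase is kept, and the result is converted back to
    rectangular coordinates before being sent to the PA."""
    mag_pd = nn_forward(abs(sym))
    return mag_pd * np.exp(1j * np.angle(sym))
```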

Neural Network Structure
The neural network structure used in the predistortor is an MLP with one hidden layer. The input layer has 1 neuron and 1 bias neuron, the hidden layer has 10 neurons and 1 bias neuron, and the output layer has 1 neuron. The nonlinear function is applied only to the neurons in the hidden layer, except the bias neuron.
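
A minimal forward pass for this 1-10-1 structure (the transcript does not name the nonlinear function, so tanh is assumed here):

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """1-10-1 MLP: scalar input plus bias, 10 nonlinear hidden neurons plus
    bias, one linear output neuron.  W1, b1, W2 have shape (10,); b2 is a
    scalar.  The nonlinearity is applied only to the hidden neurons."""
    h = np.tanh(W1 * x + b1)            # hidden layer
    return float(np.dot(W2, h) + b2)    # linear output neuron
```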

Backpropagation Algorithm
Now, let's review the backpropagation algorithm used to train the neural network. The error signal e[n] is calculated as d[n], the desired signal, minus o[n], the output of the neural network. e[n] is used to update the weights corresponding to the interconnections from the hidden layer to the output layer. It is also used to calculate the terms δk[n], which are used to update the weights corresponding to the interconnections from the input layer to the hidden layer.
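
A sketch of one backpropagation step for that 1-10-1 network, assuming a tanh hidden layer, a linear output neuron, and a squared-error cost (assumptions where the transcript is silent):

```python
import numpy as np

def backprop_step(x, d, W1, b1, W2, b2, mu):
    """e[n] = d[n] - o[n] drives the hidden-to-output weight updates;
    the back-propagated terms delta_k[n] drive the input-to-hidden updates."""
    h = np.tanh(W1 * x + b1)                     # hidden activations
    o = float(np.dot(W2, h) + b2)                # network output o[n]
    e = d - o                                    # error signal e[n]
    delta = e * W2 * (1.0 - h ** 2)              # delta_k[n] for the hidden layer
    W2 = W2 + mu * e * h
    b2 = b2 + mu * e                             # output-layer update
    W1 = W1 + mu * delta * x
    b1 = b1 + mu * delta                         # hidden-layer update
    return W1, b1, W2, b2, e
```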

GSPT-AS-based Backpropagation Algorithm
Let Q(z) represent the power-of-2 value that is closest to z but not larger than z. Applying the GSPT-AS LMS algorithm to the backpropagation algorithm, the following GSPT-AS-based backpropagation algorithm is derived: all terms required to calculate e[n] and δk[n] are first rounded to power-of-2 values.
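
A sketch of how the updating terms might look once each factor has been rounded with Q(·) (the q_pow2 helper from the GSPT-AS LMS sketch above). The exact placement of the rounding in the derivation is on the slide, not in the transcript, so this is only illustrative:

```python
import numpy as np

def gspt_as_backprop_terms(x, d, W1, b1, W2, b2):
    """Form power-of-two updating terms for both weight layers; each term's
    exponent would then select the GSPT group to adjust, as in gspt_as_select."""
    h = np.tanh(W1 * x + b1)
    e = d - float(np.dot(W2, h) + b2)
    out_terms = [q_pow2(e) * q_pow2(hk) for hk in h]                # hidden -> output
    hid_terms = [q_pow2(e * w2k * (1.0 - hk ** 2)) * q_pow2(x)      # input -> hidden
                 for w2k, hk in zip(W2, h)]
    return out_terms, hid_terms
```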

Simulation Results (1)
This figure shows the mean square error between the original constellations and the output constellations of the PA with predistortion for three schemes: the floating-point traditional neural network predistortor, the fixed-point traditional neural network predistortor, and the GSPT-AS-based neural network predistortor. The floating-point scheme achieves the lowest mean square error but converges more slowly than the other two schemes. In contrast, the GSPT-AS-based scheme converges fastest among the three schemes but has the worst MSE performance.

Simulation Results (2)
This figure shows the mean square error over the first 2 million iterations. You can see that the fluctuation of the GSPT-AS-based scheme is much more severe than that of the other two schemes; this is caused by an inherent property of the GSPT-AS LMS algorithm.

64-QAM Constellation with GSPT-AS-based Predistortion
The 64-QAM constellations with GSPT-AS-based predistortion are shown here. The constellation points with predistortion are denoted by circles and those without predistortion are denoted by x. You can see that the performance is quite good except for the four corner constellation points.

Floating-Point Scheme vs. GSPT-AS-based Scheme
This slide compares the performance of the floating-point scheme and the GSPT-AS-based scheme. There is almost no difference between these two schemes.

Complexity Analysis
N: the number of neurons in the hidden layer.

Output calculation (same operation counts for both schemes): 2N multiplications, 2N + 1 additions, N evaluations of f().
Weight updating, fixed-point scheme: 5N multiplications, 4N + 1 additions.
Weight updating, GSPT-AS scheme: power-of-2 additions, 3N + 2 round-to-power-of-2 operations, 3N + 1 GSPT-AS coefficient updaters.

In this slide, the initial complexity analysis result is shown. The operations required for the two schemes to calculate the neural network's output are the same; however, the GSPT-AS-based scheme has lower complexity because the GSPT number system is used for weight representation. For weight updating in the GSPT-AS-based scheme, the multiplications are simplified to additions of power-of-2 values plus the operations needed to round some signals to power-of-2 values, and some additions are replaced by the GSPT-AS coefficient updaters. Since the complexity of a multiplier is much larger than that of these two operations, we can conclude that the GSPT-AS-based scheme has much lower complexity than the fixed-point scheme in the weight-updating part.

Conclusions
A low-complexity GSPT-AS-based neural network predistortor for a nonlinear PA has been designed and simulated. Simulation results and complexity analysis show that the GSPT-AS-based neural network predistortor achieves performance very close to that of the floating-point neural network predistortor with much lower complexity.

Reference
C. N. Chen, K. H. Chen, and T. D. Chiueh, "Algorithm and Architecture Design for a Low-Complexity Adaptive Equalizer," in Proc. IEEE ISCAS '03, 2003, pp. 304-307.
R. van Nee and R. Prasad, OFDM for Wireless Multimedia Communications, Artech House, 2000.
F. Langlet, H. Abdulkader, D. Roviras, A. Mallet, and F. Castanié, "Adaptive Predistortion for Solid State Power Amplifier using Multi-layer Perceptron," in Proc. IEEE GLOBECOM '01, vol. 1, Nov. 2001, pp. 325-329.