Radial Basis Function Networks:


1 Radial Basis Function Networks:
A powerful alternative to Multilayer Perceptron networks. An RBF network is essentially a three-layer (i.e. one hidden layer) feedforward network. The first layer consists of a number of units clamped to the input vector. The hidden layer is composed of units, each having an overall response function (activation function), usually a Gaussian as defined below:

φ_j(x) = exp( -||x - c_j||^2 / (2σ_j^2) )    (1)

where x is the input vector, c_j is the centre of the jth RBF and σ_j^2 is its variance.
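The Gaussian response of equation (1) can be sketched in a few lines of NumPy (the function name is illustrative):

```python
import numpy as np

def rbf_activation(x, centre, sigma):
    # phi_j(x) = exp(-||x - c_j||^2 / (2 * sigma_j^2))
    dist_sq = np.sum((np.asarray(x, float) - np.asarray(centre, float)) ** 2)
    return np.exp(-dist_sq / (2.0 * sigma ** 2))
```

The response peaks at 1 when the input coincides with the centre and decays with distance at a rate controlled by σ.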

2 The third layer computes the output function for each class as follows:

y_k(x) = Σ_{j=1..M} w_kj φ_j(x)    (2)

where M is the number of RBFs and w_kj is the weight connecting the jth RBF to the kth output.
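A complete forward pass combining equations (1) and (2) might look like this sketch (the function name and array layout are assumptions):

```python
import numpy as np

def rbf_forward(x, centres, sigmas, weights):
    # centres: (M, d) RBF centres; sigmas: (M,) widths; weights: (K, M)
    x = np.asarray(x, float)
    dist_sq = np.sum((np.asarray(centres, float) - x) ** 2, axis=1)   # (M,)
    phi = np.exp(-dist_sq / (2.0 * np.asarray(sigmas, float) ** 2))   # hidden responses
    return np.asarray(weights, float) @ phi   # y_k = sum_j w_kj * phi_j, shape (K,)
```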

3 RBF versus MLP Networks
An RBF network (in most applications) is a single-hidden-layer network, whereas an MLP network may consist of one or more hidden layers. In an MLP, all the neurons in the hidden layers and in the output layer typically share a common neuron model. In an RBF network, on the other hand, the neurons in the hidden layer are quite different from, and serve a different purpose than, those in the output layer.

4 RBF versus MLP Networks
Each neuron in a hidden layer of an MLP network takes the inner product of the input vector and the synaptic weight vector of that unit as the argument of its activation function. In an RBF network, on the other hand, the argument of each hidden unit's activation function is the Euclidean norm (distance) between the input vector and the centre of that unit. The hidden units of an RBF network are nonlinear and its output units are always linear; the hidden units of an MLP network are also nonlinear, but its output layer can be linear or nonlinear.
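The contrast between the two hidden-unit arguments can be seen side by side (the numbers and the tanh activation are illustrative):

```python
import numpy as np

x = np.array([1.0, 2.0])          # input vector

# MLP hidden unit: activation applied to an inner product (plus a bias)
w = np.array([0.5, -0.25])        # synaptic weight vector
b = 0.1
mlp_arg = np.dot(w, x) + b        # inner-product argument
mlp_out = np.tanh(mlp_arg)        # nonlinear activation, e.g. tanh

# RBF hidden unit: activation applied to a Euclidean distance from a centre
c = np.array([0.0, 2.0])          # RBF centre
sigma = 1.0
dist = np.linalg.norm(x - c)      # Euclidean-distance argument
rbf_out = np.exp(-dist ** 2 / (2.0 * sigma ** 2))
```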

5 RBF versus MLP Networks
MLPs construct global approximations to nonlinear input-output mappings and are therefore capable of generalizing in regions of the input space where little or no training data are available. RBF networks, on the other hand, construct local approximations to nonlinear input-output mappings and therefore offer fast learning and reduced sensitivity to the order of presentation of training data.

6 Training of RBF Networks:
A number of approaches to training RBF networks are available in the literature. Most of them proceed in two stages: the first stage determines an appropriate set of RBF centres and widths, and the second stage determines the connection weights from the hidden layer to the output layer. The selection of the RBF centres is the most crucial problem in designing an RBF network: the centres should be located according to the demands of the system to be trained. One popular algorithm for choosing an optimal set of RBF centres is the Orthogonal Least Squares method. This method was developed by Chen et al. and is implemented in MATLAB's Neural Network Toolbox as the function newrb.m.
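The two-stage procedure can be sketched as follows; for simplicity this sketch selects centres by random sampling of training points rather than by the Orthogonal Least Squares method of Chen et al., and all names are illustrative:

```python
import numpy as np

def train_rbf(X, Y, M, sigma, seed=0):
    rng = np.random.default_rng(seed)
    # Stage 1: choose M centres (here: random training samples, a simple
    # stand-in for centre-selection schemes such as OLS or k-means)
    centres = X[rng.choice(len(X), size=M, replace=False)]
    # Hidden-layer design matrix: Phi[n, j] = phi_j(x_n)
    dist_sq = np.sum((X[:, None, :] - centres[None, :, :]) ** 2, axis=2)
    Phi = np.exp(-dist_sq / (2.0 * sigma ** 2))
    # Stage 2: hidden-to-output weights by linear least squares
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return centres, W
```

Because the output layer is linear, the second stage reduces to an ordinary least-squares problem and needs no iterative optimisation.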

